Chinese character relation extraction model based on pre-training and multi-level information
Bowen YAO, Biqing ZENG, Jian CAI, Meirong DING
Journal of Computer Applications 2021, 41 (12): 3637-3644. DOI: 10.11772/j.issn.1001-9081.2021010090
Abstract

Relation extraction aims to identify the relationship between entity pairs in text and is one of the active research directions in Natural Language Processing (NLP). To address the problem that Chinese character relation extraction corpora have complex grammatical structures whose semantic features are difficult to learn effectively, a Chinese Character Relation Extraction model based on Pre-training and Multi-level Information (CCREPMI) was proposed. First, word vectors were generated by exploiting the strong semantic representation ability of a pre-trained model. Then, the original sentence was divided into sentence level, entity level, and entity-adjacent level for feature extraction. Finally, relation classification and prediction were performed by fusing the sentence structure features, the entity meanings, and the dependencies between entities and their adjacent words. Experimental results on a Chinese character relationship dataset show that the proposed model achieves a precision of 81.5%, a recall of 82.3%, and an F1 score of 81.9%, an improvement over baseline models such as BERT (Bidirectional Encoder Representations from Transformers) and BERT-LSTM (BERT-Long Short-Term Memory). Moreover, the model reaches an F1 score of 81.2% on the SemEval2010-task8 English dataset, indicating its ability to generalize to English corpora.

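The abstract only outlines the multi-level design (sentence-level, entity-level, and entity-adjacent features fused for classification). The following is a minimal sketch of how such a model could be wired up, not the authors' implementation: it assumes bert-base-chinese as the pre-trained encoder, mean pooling over entity spans, a one-token window standing in for the "adjacent" level, and a single linear layer for relation classification; all names and choices here are illustrative.

```python
# Sketch of a multi-level relation classifier in the spirit of CCREPMI.
# Assumptions (not from the paper): BERT "bert-base-chinese" encoder, mean
# pooling over entity spans, one preceding token as the "adjacent" context,
# and a single linear classification head over the fused features.
import torch
import torch.nn as nn
from transformers import BertModel


class MultiLevelRelationClassifier(nn.Module):
    def __init__(self, num_relations: int, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Fused feature = sentence level + two entity levels + two adjacent levels.
        self.classifier = nn.Linear(hidden * 5, num_relations)

    @staticmethod
    def _span_mean(hidden_states: torch.Tensor, span: tuple) -> torch.Tensor:
        # Mean-pool the token vectors inside an entity span (start, end).
        start, end = span
        return hidden_states[:, start:end, :].mean(dim=1)

    def forward(self, input_ids, attention_mask, e1_span, e2_span):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        seq = out.last_hidden_state                 # (batch, seq_len, hidden)
        sent_feat = out.pooler_output               # sentence-level feature
        e1_feat = self._span_mean(seq, e1_span)     # entity-level features
        e2_feat = self._span_mean(seq, e2_span)
        # Entity-adjacent features: the token just before each entity span,
        # a crude stand-in for the dependency with neighbouring words.
        e1_adj = seq[:, max(e1_span[0] - 1, 0), :]
        e2_adj = seq[:, max(e2_span[0] - 1, 0), :]
        fused = torch.cat([sent_feat, e1_feat, e2_feat, e1_adj, e2_adj], dim=-1)
        return self.classifier(fused)               # relation logits
```

In this reading, the fusion step is a simple concatenation of the five feature vectors; the paper may use a different fusion or attention scheme, which this sketch does not attempt to reproduce.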